ci: enable repo Semgrep rules for issue #338 (#354)
Conversation
- Enable project-owned Semgrep rules in local checks, CodeRabbit, and Codacy/OpenGrep scanning.
- Harden Semgrep execution with strict mode, a higher timeout, and fixture coverage for hot-path hash collection rules.
- Replace flagged diagnostics and silent numeric fallbacks with explicit tracing, expectations, and typed Hilbert quantization errors.
- Centralize Delaunay triangulation cache invalidation through the existing repair-cache helper.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the relevant setting.
Walkthrough: Enable repository-owned Semgrep rules and wire them into CI (Codacy/CodeRabbit), add a Semgrep rule set and fixtures, re-enable Semgrep in developer recipes, tighten CI workflow pins and SARIF outputs, make Hilbert quantization fallible with explicit errors and refactor sort/index APIs, replace builder-based Vertex construction with direct factories, and standardize Clippy expectation metadata across tests/benches.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Matrix as Job Matrix
    participant CodacyCLI as Codacy Analysis CLI
    participant CodeRabbit as CodeRabbit Review
    participant SARIF as SARIF Storage
    GH->>Matrix: workflow triggered (quality scan)
    Matrix->>CodacyCLI: run codacy-analysis-cli --tool=${{matrix.tool}}
    CodacyCLI-->>Matrix: emit codacy-scan-${{matrix.tool}}.sarif
    Matrix->>SARIF: upload per-tool SARIF files
    GH->>CodeRabbit: CodeRabbit Semgrep run via .coderabbit.yml (semgrep.yaml)
    CodeRabbit->>SARIF: produce/upload SARIF / findings
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
Up to standards ✅

🟢 Issues
| Metric | Results |
|---|---|
| Complexity | 91 |
🟢 Coverage 86.39% diff coverage · +0.13% coverage variation
| Metric | Results |
|---|---|
| Coverage variation | ✅ +0.13% (-1.00%) |
| Diff coverage | ✅ 86.39% |

Coverage variation details:

| Commit | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor (a244053) | 51492 | 46189 | 89.70% |
| Head (fb99dbb) | 51562 (+70) | 46321 (+132) | 89.84% (+0.13%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`

Diff coverage details:

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#354) | 676 | 584 | 86.39% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/util/hilbert.rs (2)
553-578: ⚠️ Potential issue | 🟡 Minor

`hilbert_sorted_indices` is missing the new quantization conversion error in docs. Line 577 now calls `hilbert_quantization_scale::<T>(bits)?`, so this function can return `HilbertError::QuantizationScaleConversionFailed`, but its `# Errors` section doesn't mention it.

📝 Proposed doc fix

```diff
 /// Returns [`HilbertError::DimensionTooLarge`] if the dimension `D` exceeds `u32::MAX`
 /// (extremely unlikely in practice).
 ///
+/// Returns [`HilbertError::QuantizationScaleConversionFailed`] if the quantization
+/// grid maximum cannot be represented by the coordinate scalar type.
+///
 /// # Examples
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/util/hilbert.rs` around lines 553 - 578, Update the hilbert_sorted_indices documentation to list the new possible error from hilbert_quantization_scale by adding HilbertError::QuantizationScaleConversionFailed to the `# Errors` section; locate the function hilbert_sorted_indices and its docs, and mention that calling hilbert_quantization_scale::<T>(bits) can return QuantizationScaleConversionFailed so it should be documented alongside the existing InvalidBitsParameter, DimensionTooLarge, and IndexOverflow errors.
15-16: ⚠️ Potential issue | 🔴 Critical

Add `#[non_exhaustive]` to `HilbertError` before extending variants. A new public variant was added, but the enum itself is still exhaustively matchable by downstream users. Mark `HilbertError` as non-exhaustive to ensure future additive variants remain non-breaking.

Additionally, `hilbert_sorted_indices` calls `hilbert_quantization_scale::<T>(bits)?` at line 577, which can return `HilbertError::QuantizationScaleConversionFailed`, but this error is not documented in the function's error section (lines 534–545).

♻️ Proposed fix

```diff
 #[derive(Clone, Debug, thiserror::Error, PartialEq, Eq)]
+#[non_exhaustive]
 pub enum HilbertError {
```

And update `hilbert_sorted_indices` docs to include:

```diff
 Returns [`HilbertError::DimensionTooLarge`] if the dimension `D` exceeds `u32::MAX`
 (extremely unlikely in practice).
+
+Returns [`HilbertError::QuantizationScaleConversionFailed`] if the computed grid maximum
+cannot be represented in the coordinate scalar type `T`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/util/hilbert.rs` around lines 15 - 16, Add the #[non_exhaustive] attribute to the HilbertError enum declaration (the enum named HilbertError with #[derive(...)] ) so downstream code cannot exhaustively match it; also update the hilbert_sorted_indices function documentation to list HilbertError::QuantizationScaleConversionFailed as a possible returned error (since hilbert_sorted_indices calls hilbert_quantization_scale::<T>(bits)? which can return that variant). Ensure both changes reference the existing symbols HilbertError, hilbert_sorted_indices, and hilbert_quantization_scale::<T>(bits) so reviewers can locate the edits.
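To illustrate the suggested attribute, here is a minimal self-contained sketch (the variant names mirror the review; the `describe` consumer is hypothetical). Note that `#[non_exhaustive]` only forces a wildcard arm across crate boundaries, so a single-file demo can only show the intended matching pattern:

```rust
// Sketch of the review suggestion: `#[non_exhaustive]` keeps future
// additive variants non-breaking for downstream crates, which must then
// include a wildcard arm when matching. `describe` stands in for a
// hypothetical downstream consumer.
#[non_exhaustive]
#[derive(Debug)]
enum HilbertError {
    InvalidBitsParameter,
    DimensionTooLarge,
    QuantizationScaleConversionFailed,
}

fn describe(err: &HilbertError) -> &'static str {
    match err {
        HilbertError::InvalidBitsParameter => "invalid bits",
        HilbertError::DimensionTooLarge => "dimension too large",
        // Downstream crates need this arm once the enum is non-exhaustive,
        // so adding variants later does not break their builds.
        _ => "other hilbert error",
    }
}

fn main() {
    for err in [
        HilbertError::InvalidBitsParameter,
        HilbertError::DimensionTooLarge,
        HilbertError::QuantizationScaleConversionFailed,
    ] {
        println!("{err:?} -> {}", describe(&err));
    }
}
```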
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.coderabbit.yml:
- Around line 109-112: Add the repository-owned Semgrep config path under the
existing semgrep section so CodeRabbit loads .semgrep.yaml: in the
.coderabbit.yml semgrep block (the keys "semgrep" and "enabled") add a
"config_file: .semgrep.yaml" entry alongside enabled: true to explicitly point
CodeRabbit to the repository's Semgrep rules file.
In @.semgrep.yaml:
- Around line 185-199: Update the semgrep rule id
"delaunay.rust.no-production-unwrap-panic" to also match .expect(...) calls so
they cannot bypass the check: in the patterns.pattern-either array add a pattern
like "$VALUE.expect(...)" (mirroring the existing "$VALUE.unwrap()" entry) and
ensure the rule still applies to the same paths; this will catch production uses
such as the .expect("point was set before building vertex") in
src/core/vertex.rs::from_points and force returning typed errors instead of
panicking.
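For illustration, the adjusted `pattern-either` block could look roughly like this (the rule id and the existing `$VALUE.unwrap()` entry come from the review; the `message`, `severity`, and surrounding keys are assumptions about the rule's shape, not the repository's actual file):

```yaml
rules:
  - id: delaunay.rust.no-production-unwrap-panic
    languages: [rust]
    severity: ERROR # assumed
    message: Return typed errors instead of panicking in production code. # assumed
    patterns:
      - pattern-either:
          - pattern: $VALUE.unwrap()
          - pattern: $VALUE.expect(...) # new entry mirroring the unwrap pattern
```

With both patterns in the `pattern-either`, an `.expect("point was set before building vertex")` call is flagged the same way a bare `.unwrap()` would be.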
In `@justfile`:
- Around line 587-588: The semgrep-test target is running semgrep against a
fixture in tests/semgrep/... which doesn't match the rule's include pattern
(/src/core/algorithms/**), so the rule never runs; update the semgrep-test
target (symbol: semgrep-test) to ensure the fixture path is in-scope by either
running uv/semgrep from the tests/semgrep directory, adding the fixture path to
the scan arguments, or expanding the rule's include pattern (rule id:
delaunay.rust.no-std-hash-collections-in-hot-src) to also match
tests/semgrep/src/core/algorithms/** so the test actually exercises the rule.
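One possible adjustment, sketched as a just recipe (the recipe name and fixture layout come from the comment; the exact `uv`/`semgrep` invocation and flags are assumptions, not the repository's actual recipe). Running from `tests/semgrep` makes the fixture's `src/core/algorithms/**` layout line up with the rule's root-anchored include pattern:

```just
# Hypothetical sketch, not the repository's actual recipe.
semgrep-test:
    cd tests/semgrep && uv run semgrep scan --config ../../.semgrep.yaml --strict .
```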
In `@src/core/util/hilbert.rs`:
- Around line 456-457: The documentation for hilbert_indices_prequantized
incorrectly mentions returning HilbertError::QuantizationScaleConversionFailed
even though the function accepts pre-quantized u32 coordinates and never
performs any scalar/scale conversion; update the doc comment to remove that
error from the returns/errors list (or replace it with the correct possible
errors if any), and ensure any sentence that references "quantization grid
maximum cannot be represented by the coordinate scalar type" is deleted or
reworded to accurately reflect that hilbert_indices_prequantized operates on u32
inputs and does not perform scalar conversion.
In `@src/core/vertex.rs`:
- Around line 599-604: The public method from_points currently maps Points to
Vertices by calling VertexBuilder::default().point(*p).build().expect(...),
which can panic if build() gains error variants; replace this fragile path with
an infallible conversion: add or use an explicit constructor (e.g.,
Vertex::from_point or Vertex::new_from_point) that creates a Vertex directly
from a Point without using the fallible VertexBuilder::build(), then change
from_points to map each Point to that constructor (remove expect/unwrap). Ensure
the new constructor lives alongside Vertex/impl and is used by from_points so
the public API no longer can panic on valid input.
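A minimal sketch of the suggested refactor (the `Point`/`Vertex` types here are stripped-down stand-ins, not the crate's real definitions): an infallible `Vertex::from_point` constructor lets `from_points` map points without any `expect`/`unwrap` in the path:

```rust
// Stand-ins for the crate's Point/Vertex (hypothetical here): the point is
// stored as-is, so construction cannot fail and `from_points` carries no
// expect()/unwrap() for the panic-avoidance rule to flag.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point<const D: usize>([f64; D]);

#[derive(Clone, Copy, Debug, PartialEq)]
struct Vertex<const D: usize> {
    point: Point<D>,
}

impl<const D: usize> Vertex<D> {
    /// Infallible: no builder, no validation step that can error.
    fn from_point(point: Point<D>) -> Self {
        Self { point }
    }

    /// Maps points to vertices with no panicking path.
    fn from_points(points: &[Point<D>]) -> Vec<Self> {
        points.iter().map(|p| Self::from_point(*p)).collect()
    }
}

fn main() {
    let points = [Point([0.0, 1.0]), Point([2.0, 3.0])];
    let vertices = Vertex::from_points(&points);
    assert_eq!(vertices.len(), 2);
    assert_eq!(vertices[1], Vertex::from_point(Point([2.0, 3.0])));
    println!("constructed {} vertices infallibly", vertices.len());
}
```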
---
Outside diff comments:
In `@src/core/util/hilbert.rs`:
- Around line 553-578: Update the hilbert_sorted_indices documentation to list
the new possible error from hilbert_quantization_scale by adding
HilbertError::QuantizationScaleConversionFailed to the `# Errors` section;
locate the function hilbert_sorted_indices and its docs, and mention that
calling hilbert_quantization_scale::<T>(bits) can return
QuantizationScaleConversionFailed so it should be documented alongside the
existing InvalidBitsParameter, DimensionTooLarge, and IndexOverflow errors.
- Around line 15-16: Add the #[non_exhaustive] attribute to the HilbertError
enum declaration (the enum named HilbertError with #[derive(...)] ) so
downstream code cannot exhaustively match it; also update the
hilbert_sorted_indices function documentation to list
HilbertError::QuantizationScaleConversionFailed as a possible returned error
(since hilbert_sorted_indices calls hilbert_quantization_scale::<T>(bits)? which
can return that variant). Ensure both changes reference the existing symbols
HilbertError, hilbert_sorted_indices, and hilbert_quantization_scale::<T>(bits)
so reviewers can locate the edits.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 07fc940f-21d4-4786-abe1-56c750a39d61
📒 Files selected for processing (11)
- .codacy.yml
- .coderabbit.yml
- .github/workflows/codacy.yml
- .semgrep.yaml
- justfile
- src/core/algorithms/pl_manifold_repair.rs
- src/core/util/hilbert.rs
- src/core/vertex.rs
- src/geometry/util/measures.rs
- src/triangulation/delaunay.rs
- tests/semgrep/src/core/algorithms/no_std_hash_collections.rs
```diff
 # Semantic code analysis uses only the focused, repository-owned rules in
 # .semgrep.yaml. Default Semgrep packs remain disabled in CodeRabbit.
 semgrep:
-  enabled: false
+  enabled: true
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -name ".semgrep.yaml" -o -name "semgrep.yaml" -o -name "semgrep.yml" -o -name "semgrep.config.yml" -o -name "semgrep.config.yaml" 2>/dev/null | head -20
```

Repository: acgetchell/delaunay
Length of output: 79
🌐 Web query:
CodeRabbit Semgrep config file auto-discovery behavior documentation
💡 Result:
CodeRabbit's Semgrep config file auto-discovery behavior is documented in the official docs at https://docs.coderabbit.ai/tools/semgrep. By default, CodeRabbit automatically uses Semgrep config files named semgrep.yml, semgrep.yaml, semgrep.config.yml, or semgrep.config.yaml if present in the root directory of the repository. Semgrep only runs if a config file is found using these default names, or if a custom path is explicitly specified via: - reviews.tools.semgrep.config_file in the repository's .coderabbit.yaml file, or - The "Reviews → Tools → Semgrep → Config File" field in CodeRabbit's web settings UI. Due to licensing restrictions, CodeRabbit does not include community Semgrep rules and requires a user-provided config with rules. Note that the related OpenGrep tool (Semgrep-compatible) has a broader fallback search order including Semgrep files, starting from repo root and PR-changed files, then broader search—but Semgrep itself sticks to root defaults or explicit paths.
Citations:
- 1: https://docs.coderabbit.ai/tools/semgrep
- 2: https://docs.coderabbit.ai/tools/semgrep/
- 3: https://docs.coderabbit.ai/tools/opengrep
- 4: https://docs.coderabbit.ai/changelog/security-and-sast-remediation-improvements
Add config_file: .semgrep.yaml to enable Semgrep with repository-owned rules.
CodeRabbit auto-discovers only semgrep.yml, semgrep.yaml, semgrep.config.yml, and semgrep.config.yaml. This repository uses .semgrep.yaml, which falls outside the auto-discovery pattern. Setting enabled: true alone will not load your repository-owned rules. Explicitly specify the config file path to enable Semgrep properly.
Suggested fix
```diff
 semgrep:
   enabled: true
+  config_file: .semgrep.yaml
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
# Semantic code analysis uses only the focused, repository-owned rules in
# .semgrep.yaml. Default Semgrep packs remain disabled in CodeRabbit.
semgrep:
  enabled: true
  config_file: .semgrep.yaml
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.coderabbit.yml around lines 109 - 112, Add the repository-owned Semgrep
config path under the existing semgrep section so CodeRabbit loads
.semgrep.yaml: in the .coderabbit.yml semgrep block (the keys "semgrep" and
"enabled") add a "config_file: .semgrep.yaml" entry alongside enabled: true to
explicitly point CodeRabbit to the repository's Semgrep rules file.
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #354      +/-   ##
==========================================
+ Coverage   89.68%   89.81%   +0.13%
==========================================
  Files          58       58
  Lines       51297    51367      +70
==========================================
+ Hits        46005    46137     +132
+ Misses       5292     5230      -62
```

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
- Rename the Semgrep config to semgrep.yaml and wire it into local checks, CodeRabbit, and Codacy/OpenGrep.
- Add strict Semgrep execution plus fixture coverage for hot-path hash collections and targeted panic bypasses.
- Make Hilbert errors non-exhaustive and document quantization-scale conversion failures on APIs that can return them.
- Replace fragile VertexBuilder expect paths with infallible Vertex point constructors.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@semgrep.yaml`:
- Around line 36-54: The listed pattern-not and pattern-not-regex exclusions
(e.g., entries matching "$TYPE::try_from($VALUE).expect(...)",
"$VALUE.to_f64().expect(...)", "NonZeroUsize::new($VALUE).expect(...)",
"build_facet_to_cells_map().expect(...)", and the expect("...") message
exclusions) are ineffective because they can never share the same AST position
as the println!/eprintln! matches; remove those pattern-not and
pattern-not-regex clauses from semgrep.yaml so the rule only contains meaningful
exclusions (keep any other valid filters but delete the shown expect-related
pattern-not and pattern-not-regex entries).
In `@src/geometry/util/triangulation_generation.rs`:
- Around line 100-101: The public docs' "# Panics" section is out of date
because Vertex construction is now infallible via Vertex::from_point and
Vertex::from_point_with_data; update the documentation for the affected
function(s) in triangulation_generation.rs to remove or revise any panic claims
and instead state that vertex construction is infallible (or document the real,
non-panicking conditions, if any). Locate references to Vertex::from_point and
Vertex::from_point_with_data in the function docs and replace the panic contract
with the current behavior (e.g., no panics on user input; describe returned
values/side-effects) to comply with the Result/No-panic guideline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: bea0bfae-c950-4134-8d64-0af0990d9ec6
📒 Files selected for processing (11)
- .codacy.yml
- .coderabbit.yml
- .github/workflows/codacy.yml
- docs/code_organization.md
- justfile
- semgrep.yaml
- src/core/util/hilbert.rs
- src/core/vertex.rs
- src/geometry/util/triangulation_generation.rs
- src/triangulation/builder.rs
- tests/semgrep/src/core/algorithms/no_std_hash_collections.rs
✅ Files skipped from review due to trivial changes (1)
- docs/code_organization.md
🚧 Files skipped from review as they are similar to previous changes (5)
- .coderabbit.yml
- .codacy.yml
- tests/semgrep/src/core/algorithms/no_std_hash_collections.rs
- src/core/vertex.rs
- .github/workflows/codacy.yml
- Add project-specific Semgrep checks for Rust dynamic errors, lint suppression reasons, Python subprocess mocks, and typed script helpers.
- Add focused Semgrep fixtures for hot-path hash collections, Rust project rules, and Python test conventions.
- Wire the expanded Semgrep fixture suite into `just check`.
- Replace stale Clippy `allow` suppressions with documented `expect` attributes and remove dynamic error trait-object usage from tests.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/geometry/algorithms/convex_hull.rs`:
- Around line 1904-1908: Replace unconditional println! calls in the test blocks
in convex_hull.rs (and the other listed ranges) with tracing macros and gate
them behind the test-debug feature: change println! to tracing::debug! or
tracing::info!, and wrap the diagnostics so they only run when the test-debug
feature is enabled (either via an if cfg!(feature = "test-debug") { ... } guard
or by putting the diagnostic statement inside a #[cfg(feature = "test-debug")]
block). Update imports to use tracing::{debug,info} where needed and ensure the
tests retain the same messages but no longer print unconditionally to stdout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: d0225ca8-dc50-4599-b726-f578cbf8c783
📒 Files selected for processing (24)
- benches/ci_performance_suite.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- justfile
- semgrep.yaml
- src/core/algorithms/flips.rs
- src/core/boundary.rs
- src/core/facet.rs
- src/core/traits/facet_cache.rs
- src/core/util/facet_keys.rs
- src/geometry/algorithms/convex_hull.rs
- src/geometry/point.rs
- src/geometry/predicates.rs
- src/geometry/quality.rs
- src/geometry/robust_predicates.rs
- src/geometry/traits/coordinate.rs
- src/geometry/util/triangulation_generation.rs
- src/triangulation/delaunay.rs
- tests/coordinate_conversion_errors.rs
- tests/delaunay_edge_cases.rs
- tests/proptest_sos.rs
- tests/semgrep/scripts/tests/python_exceptions.py
- tests/semgrep/src/core/algorithms/no_std_hash_collections.rs
- tests/semgrep/src/project_rules/rust_style.rs
✅ Files skipped from review due to trivial changes (13)
- tests/proptest_sos.rs
- src/core/boundary.rs
- src/core/util/facet_keys.rs
- tests/delaunay_edge_cases.rs
- tests/coordinate_conversion_errors.rs
- src/geometry/robust_predicates.rs
- src/geometry/predicates.rs
- src/core/traits/facet_cache.rs
- src/core/facet.rs
- benches/profiling_suite.rs
- benches/large_scale_performance.rs
- src/geometry/traits/coordinate.rs
- src/geometry/point.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/semgrep/src/core/algorithms/no_std_hash_collections.rs
- semgrep.yaml
- src/geometry/util/triangulation_generation.rs
```rust
#[expect(
    clippy::too_many_lines,
    clippy::cognitive_complexity,
    reason = "test keeps dimension-specific visibility cases together"
)]
```
Replace unconditional println! diagnostics in touched tests with feature-gated tracing.
These test blocks still emit unconditional stdout, which conflicts with repository diagnostics policy and can reintroduce static-analysis noise in src/**/*.rs.
Suggested pattern
```diff
-    println!("Testing comprehensive visibility algorithms in dimensions 2D-5D");
+    #[cfg(feature = "test-debug")]
+    tracing::info!("Testing comprehensive visibility algorithms in dimensions 2D-5D");
```

```diff
-    println!("  Testing 2D visibility algorithms...");
+    #[cfg(feature = "test-debug")]
+    tracing::debug!("Testing 2D visibility algorithms...");
```

As per coding guidelines, "Use tracing::{debug,info,warn,error}! for committed diagnostics across production code, tests, and benchmarks..." and "Gate non-essential test/benchmark diagnostics behind feature flags using test-debug for test diagnostics and bench-logging for benchmark diagnostics in Rust code."
Also applies to: 2194-2197, 2475-2478, 3809-3812, 4859-4862, 5199-5202, 5402-5405, 5564-5567, 5696-5699, 5940-5943
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/geometry/algorithms/convex_hull.rs` around lines 1904 - 1908, Replace
unconditional println! calls in the test blocks in convex_hull.rs (and the other
listed ranges) with tracing macros and gate them behind the test-debug feature:
change println! to tracing::debug! or tracing::info!, and wrap the diagnostics
so they only run when the test-debug feature is enabled (either via an if
cfg!(feature = "test-debug") { ... } guard or by putting the diagnostic
statement inside a #[cfg(feature = "test-debug")] block). Update imports to use
tracing::{debug,info} where needed and ensure the tests retain the same messages
but no longer print unconditionally to stdout.
- Pin GitHub workflow tool versions and update action SHAs for cache, artifact upload, install-action, and SARIF upload.
- Exclude Semgrep fixtures from Codacy analysis so intentional rule-test violations do not surface as production issues.
- Add a cargo-machete backed `just unused-deps` recipe for checking unused direct dependencies.
- Gate convex hull test diagnostics behind test-debug tracing instead of unconditional stdout output.
- Add Hilbert ordering and zero-dimensional sort coverage for Codecov patch gaps.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/core/util/hilbert.rs (1)
636-661: ⚡ Quick win

Add a regression test for the new quantization error path.

The added tests cover ordering and `D == 0`, but this PR also introduces a new public `QuantizationScaleConversionFailed` branch. Please pin at least one failing case through a public API so the old silent-fallback behavior cannot reappear unnoticed.

Based on learnings: "Unit tests must cover known values, error paths, and dimension-generic correctness for Rust".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/util/hilbert.rs` around lines 636 - 661, Add a regression test that exercises the new QuantizationScaleConversionFailed error via the public API (e.g., call hilbert_sorted_indices and/or hilbert_sort_by_unstable/hilbert_sort_by_stable) and assert the call returns an Err variant matching QuantizationScaleConversionFailed; construct input bounds/coordinates known to trigger the conversion failure (use the public functions hilbert_sorted_indices, hilbert_sort_by_unstable, or hilbert_sort_by_stable and check the returned Result), and add the test alongside the existing hilbert tests so the error path is pinned by CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/util/hilbert.rs`:
- Around line 141-145: The code silently maps failed scaled.round().to_u32()
conversions to 0 (in the quantized assignment inside
hilbert_quantize_with_scale), which can corrupt Hilbert ordering; change this to
return a typed error instead of defaulting to 0 (propagate a
QuantizationScaleConversionFailed-like error from hilbert_quantize_with_scale)
or replace the map_or(0, ...) with an expect/assert that documents why a failure
is impossible; update the function signature to propagate the error (or add an
explicit panic message) and adjust callers of hilbert_quantize_with_scale
accordingly so conversion failures are not silently collapsed to zero.
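The suggested propagation could look roughly like this: a self-contained sketch with a hypothetical error type standing in for `HilbertError::QuantizationScaleConversionFailed`, where the failed f64-to-u32 conversion becomes a typed error instead of a silent 0:

```rust
// Sketch of the reviewer's suggestion (hypothetical names): instead of
// `scaled.round().to_u32().map_or(0, ...)` collapsing failures to cell 0,
// out-of-range or non-finite values surface as a typed error.
#[derive(Debug, PartialEq)]
enum QuantizationError {
    ScaleConversionFailed,
}

/// Quantizes a normalized coordinate in [0, 1] onto a grid with maximum `scale`.
fn quantize_with_scale(normalized: f64, scale: f64) -> Result<u32, QuantizationError> {
    let scaled = (normalized * scale).round();
    // Only finite values within u32 range convert; everything else errors.
    if scaled.is_finite() && (0.0..=f64::from(u32::MAX)).contains(&scaled) {
        Ok(scaled as u32)
    } else {
        Err(QuantizationError::ScaleConversionFailed)
    }
}

fn main() {
    assert_eq!(quantize_with_scale(0.5, 1023.0), Ok(512));
    // NaN no longer quantizes to cell 0; it surfaces as an error the
    // caller must handle, so Hilbert ordering cannot be silently corrupted.
    assert_eq!(
        quantize_with_scale(f64::NAN, 1023.0),
        Err(QuantizationError::ScaleConversionFailed)
    );
    println!("quantization failures are now explicit");
}
```

Callers then propagate the error with `?`, matching how the rest of the Hilbert API already reports typed errors.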
---
Nitpick comments:
In `@src/core/util/hilbert.rs`:
- Around line 636-661: Add a regression test that exercises the new
QuantizationScaleConversionFailed error via the public API (e.g., call
hilbert_sorted_indices and/or hilbert_sort_by_unstable/hilbert_sort_by_stable)
and assert the call returns an Err variant matching
QuantizationScaleConversionFailed; construct input bounds/coordinates known to
trigger the conversion failure (use the public functions hilbert_sorted_indices,
hilbert_sort_by_unstable, or hilbert_sort_by_stable and check the returned
Result), and add the test alongside the existing hilbert tests so the error path
is pinned by CI.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 3987cff9-56e8-41d4-a0e1-8a557e20849d
📒 Files selected for processing (16)
- .codacy.yml
- .github/workflows/audit.yml
- .github/workflows/benchmarks.yml
- .github/workflows/ci.yml
- .github/workflows/codacy.yml
- .github/workflows/codecov.yml
- .github/workflows/codeql.yml
- .github/workflows/generate-baseline.yml
- .github/workflows/profiling-benchmarks.yml
- .github/workflows/rust-clippy.yml
- .gitignore
- justfile
- semgrep.yaml
- src/core/util/hilbert.rs
- src/geometry/algorithms/convex_hull.rs
- src/triangulation/delaunay.rs
💤 Files with no reviewable changes (1)
- .gitignore
✅ Files skipped from review due to trivial changes (3)
- .github/workflows/generate-baseline.yml
- .github/workflows/codeql.yml
- .github/workflows/profiling-benchmarks.yml
🚧 Files skipped from review as they are similar to previous changes (2)
- .codacy.yml
- semgrep.yaml
- Return typed Hilbert errors for non-finite quantization inputs and failed u32 coordinate conversions instead of silently collapsing values.
- Preserve item order when Hilbert sort key construction fails, and add regression coverage for the new error paths.
- Add the focused ordering prelude and update doctests, examples, benchmarks, and integration tests to use orthogonal prelude imports.
- Add a Semgrep rule and fixture coverage for examples and benchmarks that bypass focused preludes.
- Verify pinned shfmt binaries in CI with explicit SHA256 values instead of downloading a missing upstream checksum file.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/vertex.rs`:
- Around line 565-566: The constructor docs incorrectly claim an infallible
guarantee because `Point` is "already validated"; update the doc comment for the
`Vertex` constructor that accepts a `Point` (the `From<Point>`/`Vertex::from`
constructor) to remove the implication that `Point` holds only valid/finite
coordinates and instead state that the constructor is infallible because it does
not perform additional validation (so callers must ensure coordinates are
finite/validated if required); apply the same wording fix to the other doc block
mentioned around the second occurrence (lines noted in the review).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 5ff5b4ae-3bc5-44b3-9eff-04a93547656e
📒 Files selected for processing (17)
- .github/workflows/ci.yml
- benches/ci_performance_suite.rs
- benches/circumsphere_containment.rs
- benches/cold_path_predicates.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- benches/topology_guarantee_construction.rs
- examples/delaunayize_repair.rs
- examples/topology_editing_2d_3d.rs
- examples/triangulation_3d_100_points.rs
- semgrep.yaml
- src/core/util/hilbert.rs
- src/core/vertex.rs
- src/lib.rs
- tests/prelude_exports.rs
- tests/regressions.rs
- tests/semgrep/src/project_rules/rust_style.rs
✅ Files skipped from review due to trivial changes (8)
- examples/delaunayize_repair.rs
- benches/cold_path_predicates.rs
- benches/circumsphere_containment.rs
- benches/topology_guarantee_construction.rs
- tests/regressions.rs
- benches/profiling_suite.rs
- semgrep.yaml
- tests/semgrep/src/project_rules/rust_style.rs
```rust
/// This constructor is infallible because [`Point`] already owns validated
/// coordinates for the vertex dimension.
```
Fix the validation guarantee wording in constructor docs.
The docs currently imply Point is already validated, but this type can still hold non-finite coordinates until explicit validation is run. Please reword to avoid overstating guarantees.
Suggested doc tweak

```diff
-    /// This constructor is infallible because [`Point`] already owns validated
-    /// coordinates for the vertex dimension.
+    /// This constructor is infallible because it performs no additional
+    /// validation; call [`Vertex::is_valid`] when coordinate validity matters.
@@
-    /// This constructor is infallible because [`Point`] already owns validated
-    /// coordinates for the vertex dimension and `data` is stored as provided.
+    /// This constructor is infallible because it performs no additional
+    /// validation, and `data` is stored as provided.
```

Also applies to: 593-594
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/vertex.rs` around lines 565 - 566, The constructor docs incorrectly
claim an infallible guarantee because `Point` is "already validated"; update the
doc comment for the `Vertex` constructor that accepts a `Point` (the
`From<Point>`/`Vertex::from` constructor) to remove the implication that `Point`
holds only valid/finite coordinates and instead state that the constructor is
infallible because it does not perform additional validation (so callers must
ensure coordinates are finite/validated if required); apply the same wording fix
to the other doc block mentioned around the second occurrence (lines noted in
the review).
Really? You held up a merge over a comment?
@acgetchell, apologies for the frustration — that was a Minor suggestion and was absolutely not intended to block a merge. Please don't let wording-only doc comments slow you down. Feel free to merge and address (or ignore) it at your discretion.
( ´•ω•` )っ🐇💨
🧠 Learnings used
Learnt from: CR
Repo: acgetchell/delaunay PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-25T04:34:17.766Z
Learning: Applies to **/*.rs : State invariants mathematically where possible in doc comments (e.g., χ(S^d) = 1 + (−1)^d) rather than prose-only
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 261
File: src/core/algorithms/incremental_insertion.rs:296-304
Timestamp: 2026-03-14T14:02:19.451Z
Learning: In `src/core/triangulation.rs`, the perturbation retry mechanism (around line 3177) uses `InsertionError::is_retryable()` only after unconditionally rolling back `self.tds = tds_snapshot` (line 3166). This means `IsolatedVertex` being classified as retryable in `is_tds_error_retryable()` is correct and intentional: the TDS is always restored to a clean snapshot before retrying, so the post-mutation origin of `IsolatedVertex` does not create unsoundness. Do not flag `IsolatedVertex` as incorrectly retryable.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 265
File: src/core/triangulation.rs:447-451
Timestamp: 2026-03-15T16:33:29.981Z
Learning: Repo acgetchell/delaunay (Rust): In src/core/triangulation.rs, impl From<ManifoldError> for TriangulationValidationError intentionally uses unreachable! for ManifoldError::Tds because TriangulationValidationError is Level 3–only and the companion impl From<ManifoldError> for InvariantError handles the Tds variant first. Do not flag this unreachable! as a panic path; it encodes a proven internal routing invariant.
Learnt from: CR
Repo: acgetchell/delaunay PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-25T04:34:17.766Z
Learning: Applies to src/**/*.rs : Repair paths (repair_delaunay_with_flips, repair_facet_oversharing, delaunayize_by_flips) must bound work via explicit budgets (max_flips, max_iterations, max_cells_removed) and surface non-convergence as typed error, never by logging and proceeding in Rust
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 300
File: src/core/triangulation_data_structure.rs:5853-5857
Timestamp: 2026-04-01T22:01:55.156Z
Learning: In this Rust repo, the Vertex type does not provide degree() or incident_cells() methods. When writing code changes or making review suggestions, avoid proposing or using those APIs. If you need the vertex’s “star size” in tests, compute it via the triangulation data structure, e.g. dt.as_triangulation().tds.find_cells_containing_vertex(vertex_key).len().
Closes #338